Conversation

Contributor

@ningmingxiao ningmingxiao commented Nov 13, 2025

Fix: when the /var/lib/nerdctl directory is full, the container's log cannot continue even after space is freed up. @AkihiroSuda

containerd with loguri=file (containerd/containerd#12488) also has this problem.

@ningmingxiao ningmingxiao changed the title fix: Ensure logs are not interrupted, even if the disk becomes full a… fix: Ensure logs are not interrupted Nov 13, 2025
@ningmingxiao ningmingxiao changed the title fix: Ensure logs are not interrupted fix: Ensure logs are not interrupted when disk is full Nov 13, 2025
func (pw *discardWriter) Write(p []byte) (int, error) {
	n, err := pw.writer.Write(p)
	if err != nil && errors.As(err, new(syscall.Errno)) {
		return len(p), nil
	}
	return n, err
}
Member

What are you trying to do in this PR?
Doesn't seem correct to discard all syscall errors

Contributor Author

Sorry, I made a mistake. @AkihiroSuda

Member

@AkihiroSuda AkihiroSuda left a comment

func (pw *discardWriter) Write(p []byte) (int, error) {
	n, err := pw.writer.Write(p)
	if err != nil && errors.Is(err, syscall.ENOSPC) {
		return len(p), nil
	}
	return n, err
}
Member

In the case of ENOSPC, the write should be retried with sleep?

Contributor Author

@ningmingxiao ningmingxiao commented Nov 17, 2025

I'm afraid that if retries keep failing for a long time, the container's main process may hang, because the pipe is full.

Member

No need to retry forever

Contributor Author

How long do you think is appropriate? @AkihiroSuda

Contributor Author

If we retry with some sleep, I'm afraid the consumer may drain container logs more slowly than the producer writes them.
